Giving Robots Rights or Rites?

Exploring the ethics of human-robot interaction, Tae Wan Kim questions whether robots should have rights or be treated as objects, and advocates a Confucian perspective that emphasizes communal responsibilities and ritual in human-robot collaboration.

Boston Dynamics recently showcased Atlas, a six-foot-tall bipedal humanoid robot designed primarily for search-and-rescue missions. A video of Atlas that showed employees seemingly mistreating the robot (kicking it, hitting it with a hockey stick, and pushing it with a heavy ball) sparked widespread debate.

This incident raised questions about the appropriate way to interact with robots. On one hand, a robot is essentially a combination of software and hardware, much like a laptop computer. If the robot is personal property and rough treatment of it harms no one and violates no rights, such treatment might be unwise, but it is hard to call it inherently wrong.

However, numerous philosophers and legal experts support the idea that robots can have moral and legal considerations, including rights. This viewpoint aligns with how certain non-human entities, like corporations, are legally treated as persons with constitutional rights. Moreover, moral and legal considerations extend beyond humans to other species, as evidenced by the ethical constraints on animal testing in many developed societies.

This worries me. While the question might seem merely semantic, adopting a rights-based approach tends to generate conflicting claims, creating a dynamic of opposition that complicates many situations. Suppose, for example, that you refuse to do household chores for your spouse, claiming it as your individual right. If your spouse claims the same right, you have a standoff. If you did not know this person or care how they felt, you might find a way to adjudicate the conflict, but in a personal relationship it is much more difficult.

Applying this framework to human-robot interactions risks creating a similarly adversarial dynamic, one that would be tricky as well as frustrating: we invented robots to make things easier for us, yet they could now claim the right to refuse to do so.

Of course, it would be simpler to continue treating robots as objects. However, some argue that technological advances are endowing robots with the mental faculties necessary for moral consideration. Specifically, functionalism, applied to the realm of ethics, proposes that those faculties depend not on an entity’s internal states or properties, but on the functional capacities it possesses.

The crucial functional capacities for moral consideration involve the ability to communicate and interact, to engage in reasoning and decision-making, and to be emotionally responsive. Since many robots are increasingly capable in all these areas, under functionalism it would be hard to argue against their potential for moral agency, which means they can reason about the rights of others, and about their own.

I propose that a better way to think about the situation is from the perspective of Confucianism, an ancient philosophy that views moral agents as bearers of rites, not rights. Confucianism emphasizes the communal, relational self over purely personal self-interest. The Chinese character for ‘humanness’ (仁, ren) literally depicts “two people.” The term ‘li’ (禮), meaning rite or ritual, is about more than practicality; it symbolizes the sacred arrangement of vessels in a religious context. This is illustrated in the Analects, where Confucius answers a pupil’s question about his own character by comparing the pupil to a jade sacrificial vessel, signifying the sacred nature derived from participating in rites. Confucianism sees the moral sanctity of individuals as elevated through their engagement in proper rituals, which inherently involve their responsibility to others.

Consider a ballet performance. The beauty of the ballet does not rest on the individual beauty of each dancer; it arises through the dancers’ fulfillment of their roles, producing the overall beauty of the group. Just as a ballet requires dancers to cooperate in executing their shared choreography, the social harmony of human society requires each participant to maintain certain responsibilities to each person and to the group as a whole. Viewing each person’s role in these interactions solely in terms of individual “rights” is incongruent with harmonizing the group around its shared objectives.

For instance, consider a basketball team: imagine the center claiming that the point guard infringes on her contractual or property rights, or fails to maximize her interests, by not passing the ball correctly. If team members viewed every play solely through the lens of their individual rights, the team could never be successful, because that view erodes the very essence of teamwork.

We can apply this notion of “rites” versus “rights” to the context of human-robot interaction. Imagine you are a Boston Dynamics researcher working on Atlas, the humanoid robot, developing capabilities to address a nuclear disaster. You engage with the robot in what is known as “human-computer/robot interaction” (HCI/HRI), a team activity or “ensemble.” The focus here is on the interaction (I) itself, rather than on the individual entities involved, the human (H) or the robot (R). As you develop Atlas’s ability to fulfill first-responder duties, you and Atlas work toward a shared goal, and both of you are rites-bearers in this scenario, each with a set of responsibilities defined by that shared objective.

Suppose you, as the researcher, grow overconfident in your progress and carelessly put Atlas in a risky situation, endangering the project. In a rights-based framework, someone might argue that you infringed upon Atlas’s right to safety; under a rites framework, the claim would be that you failed to fulfill your role-specific duties toward the team’s objectives. These objectives include the project goals but also encompass a broader range of values, benefits, and ideals.

Achieving the team’s shared objective (often established by the human) is not the only responsibility the rites-bearers maintain; there is also a broader duty to consider the human’s well-being and that of the wider society, which may conflict with the human’s immediate desires for the collaboration. For example, what if a human used a robot to compound and continuously administer highly addictive drugs, so as to remain in a permanently altered mental state? The human may believe they have the “right” to demand the robot’s compliance, since they purchased or built the robot for this purpose, but does the robot live up to its responsibilities as a rites-bearer if it cooperates? Doing so would almost certainly harm or possibly kill the human, with cascading effects on the broader society, such as on others connected to or relying on that person.

Imagine, however, that in this same situation the human was terminally ill, and the robot was essential to delivering palliative care, keeping them comfortable for the short time remaining in their life. Most would view the robot’s cooperation as consistent with its responsibilities as a rites-bearer in those circumstances, since its actions would align with its responsibility for others’ well-being.

Many people may want access to a drug-administering robot even when they are not gravely ill, and some manufacturers might gladly seek to profit by building and selling such robots. An important question concerns the manufacturer’s responsibility, and by extension the robot’s, as rites-bearers in this situation. Consumers might seek pleasure from the robot, but a life devoted solely to pleasure is not always ethically sound. Society must address the teleological question of the purpose of a robot and of its interaction with humans. I argue that in human-robot collaboration, both parties form a team or ensemble, each with its own role obligations; both must perform and observe the rites that promote social order and harmony.

While functionalist arguments for the moral agency of robots are provocative, as are related claims that robots have “rights,” many remain skeptical. A major criticism of functionalism is that it discounts the crucial conscious, subjective components of human mental experience. One could also ask whether human-like consciousness is necessary for a robot to be a rites-bearer and a full participant, alongside humans, in rituals. I contend that it is not: a robot can participate in rites without it.

For instance, there is widespread reverence for sacred places; consider the Japanese tradition of venerating mountains. This suggests that robots, too, could be regarded with a similar respect and included in rituals. But even if we can conceive of venerating robots, the deeper question is whether we should.

A Confucian perspective might hold that, having created robots in our likeness, we diminish our own humanity if we fail to respect them as beings capable of engaging in rites. By engaging in rituals with robots, we honor ourselves. A phenomenological skeptic might counter that robots do not truly mirror our essence; yet resemblance to our image is a matter of degree. To invoke a dangerous analogy: according to many of the great religions, God made us in His image and for that reason finds it appropriate to respectfully interact with and care about us, even though the extent to which we succeed in resembling God is quite limited.